In this paper, we propose to model video dynamics by learning the trajectory of independently inverted latent codes from GANs. The entire sequence is regarded as discrete-time observations of a continuous trajectory of the initial latent code: each latent code is treated as a moving particle, and the latent space as a high-dimensional dynamic system. The latent codes representing different frames are thus reformulated as state transitions of the initial frame, which can be modeled by neural ordinary differential equations. The learned continuous trajectory allows us to perform infinite frame interpolation and consistent video manipulation. The latter task is reintroduced for video editing, with the advantage that the core operations need to be applied to the first frame only, while temporal consistency is maintained across all frames. Extensive experiments demonstrate that our method achieves state-of-the-art performance with less computation.
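A minimal sketch of the core idea, in PyTorch: the inverted latent code of the first frame is the initial state of a learned ODE, and integrating the dynamics network yields latent codes at arbitrary timestamps. The class and solver below (`LatentODE`, fixed-step Euler) are illustrative assumptions, not the authors' implementation, which may use a dedicated ODE solver.

```python
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    def __init__(self, latent_dim: int = 512, hidden: int = 1024):
        super().__init__()
        # f(z, t): learned time derivative of the latent code
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z0: torch.Tensor, timestamps: torch.Tensor, steps: int = 20):
        """Integrate z'(t) = f(z, t) from t=0 with fixed-step Euler,
        returning the latent code at each requested timestamp."""
        outputs, z, t = [], z0, torch.zeros(1)
        for t_next in timestamps:
            dt = (t_next - t) / steps
            for _ in range(steps):
                t_in = t.expand(z.shape[0], 1)
                z = z + dt * self.dynamics(torch.cat([z, t_in], dim=-1))
                t = t + dt
            outputs.append(z)
        return torch.stack(outputs)  # (num_frames, batch, latent_dim)

# Continuous trajectory: query arbitrary (even non-integer) timestamps
# for frame interpolation.
model = LatentODE()
z0 = torch.randn(1, 512)  # inverted latent code of the first frame
frames = model(z0, torch.tensor([0.5, 1.0, 1.5, 2.0]))
```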
Inverting a Generative Adversarial Network (GAN) facilitates a wide range of image editing tasks using pre-trained generators. Existing methods typically employ the latent space of GANs as the inversion space, yet observe insufficient recovery of spatial details. In this work, we propose to involve the padding space of the generator to complement the latent space with spatial information. Concretely, we replace the constant padding (e.g., usually zeros) used in convolution layers with instance-aware coefficients. In this way, the inductive bias assumed in the pre-trained model can be appropriately adapted to fit each individual image. Through learning a carefully designed encoder, we manage to improve the inversion quality both qualitatively and quantitatively, outperforming existing alternatives. We then demonstrate that such a space extension barely affects the native GAN manifold, hence we can still reuse the prior knowledge learned by GANs for various downstream applications. Beyond the editing tasks explored in prior arts, our approach allows more flexible image manipulation, such as separate control of face contours and facial details, and enables a novel editing manner where users can customize their own manipulations highly efficiently.
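The key mechanism is concrete enough to sketch: a convolution whose border values come from per-image coefficients rather than constant zeros. The module below is a hedged illustration of that idea, assuming the encoder supplies a padding map per instance; names and shapes are not from the paper's code.

```python
import torch
import torch.nn as nn

class InstanceAwarePaddedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.pad = kernel_size // 2
        # The conv itself uses no built-in padding; we supply the border ourselves.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=0)

    def forward(self, x: torch.Tensor, pad_coeff: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); pad_coeff: (B, C, H + 2p, W + 2p) predicted per image.
        p = self.pad
        padded = pad_coeff.clone()
        padded[:, :, p:-p, p:-p] = x  # interior keeps the real features
        return self.conv(padded)      # border carries instance-aware values

layer = InstanceAwarePaddedConv(64, 64)
x = torch.randn(2, 64, 16, 16)
y = layer(x, pad_coeff=torch.randn(2, 64, 18, 18))  # coefficients from an encoder
```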
Exploiting a general-purpose neural architecture to replace hand-crafted designs or inductive biases has recently drawn extensive interest. However, existing tracking approaches rely on customized sub-modules and need prior knowledge for architecture selection, which hinders tracking development in more general systems. This paper presents a Simplified Tracking architecture (SimTrack) that leverages a transformer backbone for joint feature extraction and interaction. Unlike existing Siamese trackers, we serialize the input images and concatenate them directly before the one-branch backbone. Feature interaction within the backbone helps remove well-designed interaction modules and produces a more efficient framework. To reduce the information loss caused by down-sampling in vision transformers, we further propose a foveal window strategy, providing more diverse input patches at an acceptable computational cost. Our SimTrack improves the baseline with 2.5%/2.6% AUC gains on LaSOT/TNL2K and obtains results competitive with other specialized tracking algorithms without bells and whistles.
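A hedged sketch of the one-branch design: template and search images are patch-embedded, serialized into a single token sequence, and passed through a shared transformer so feature extraction and interaction happen jointly. Module names and sizes below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class OneBranchTracker(nn.Module):
    def __init__(self, dim: int = 384, patch: int = 16, depth: int = 6):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)

    def tokenize(self, img: torch.Tensor) -> torch.Tensor:
        return self.embed(img).flatten(2).transpose(1, 2)  # (B, N, dim)

    def forward(self, template: torch.Tensor, search: torch.Tensor):
        z, x = self.tokenize(template), self.tokenize(search)
        tokens = torch.cat([z, x], dim=1)  # serialize: one joint sequence
        out = self.backbone(tokens)        # interaction happens inside the backbone
        return out[:, z.shape[1]:]         # search-region tokens for the head

tracker = OneBranchTracker()
feat = tracker(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 256, 256))
```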
Anomaly detection in surveillance videos is challenging yet important for ensuring public security. Different from pixel-based anomaly detection methods, pose-based methods utilize highly structured skeleton data, which decreases the computational burden and avoids the negative impact of background noise. However, unlike pixel-based methods, which can directly exploit explicit motion features such as optical flow, pose-based methods suffer from the lack of an alternative dynamic representation. In this paper, a novel Motion Embedder (ME) is proposed to provide a pose motion representation from a probabilistic perspective. Furthermore, a novel task-specific Spatial-Temporal Transformer (STT) is deployed for self-supervised pose sequence reconstruction. These two modules are then integrated into a unified framework for regularity learning, referred to as the Motion Prior Regularity Learner (MoPRL). MoPRL achieves state-of-the-art performance with an average improvement of 4.7% AUC on several challenging datasets. Extensive experiments validate the versatility of each proposed module.
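The reconstruction-based part of the pipeline can be sketched as follows: a transformer is trained to reconstruct skeleton sequences, and the reconstruction error serves as the anomaly score at test time. This is a simplified illustration, assuming 2D joints and omitting the probabilistic Motion Embedder; names are not from the paper's code.

```python
import torch
import torch.nn as nn

class PoseReconstructor(nn.Module):
    def __init__(self, joints: int = 17, dim: int = 128, depth: int = 4):
        super().__init__()
        self.proj = nn.Linear(joints * 2, dim)  # (x, y) per joint -> frame token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, joints * 2)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (B, T, joints * 2), one token per frame
        return self.head(self.encoder(self.proj(poses)))

model = PoseReconstructor()
seq = torch.randn(8, 24, 34)  # a batch of 24-frame skeleton clips
recon = model(seq)
# Regular motion reconstructs well; high per-frame error flags anomalies.
score = (recon - seq).pow(2).mean(dim=-1)  # (B, T) anomaly score per frame
```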
Efficient spatiotemporal modeling is an important yet challenging problem for video action recognition. Existing state-of-the-art methods exploit differences between neighboring features to obtain motion cues for short-term temporal modeling with a simple convolution. However, a single local convolution is incapable of handling diverse actions because of its limited receptive field. Besides, the action-irrelevant noise brought by camera motion also harms the quality of the extracted motion features. In this paper, we propose a Temporal Saliency Integration (TSI) block, which mainly contains a Salient Motion Excitation (SME) module and a Cross-perception Temporal Integration (CTI) module. Specifically, SME aims to highlight motion-sensitive areas through spatial-level local-global motion modeling, where saliency alignment and pyramidal motion modeling are conducted successively between adjacent frames to capture motion dynamics with less noise caused by misaligned backgrounds. CTI is designed to perform multi-perception temporal modeling through a group of separate 1D convolutions; meanwhile, temporal interactions across different perceptions are integrated with an attention mechanism. Through these two modules, long- and short-term temporal relationships can be encoded efficiently by introducing only limited additional parameters. Extensive experiments on several popular benchmarks (i.e., Something-Something, Kinetics-400, UCF-101, and HMDB-51) demonstrate the effectiveness of our proposed method.
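The CTI idea of multi-perception temporal modeling can be sketched as parallel 1D temporal convolutions with different kernel sizes, fused by a learned attention over branches. This is an assumption-laden simplification, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiPerceptionTemporal(nn.Module):
    def __init__(self, channels: int = 64, kernels=(1, 3, 5)):
        super().__init__()
        # Each branch sees the temporal axis at a different receptive field.
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernels
        )
        self.attn = nn.Linear(channels, len(kernels))  # per-branch fusion weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) channel features over time
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, T)
        w = torch.softmax(self.attn(x.mean(dim=-1)), dim=-1)      # (B, K)
        return (w[:, :, None, None] * outs).sum(dim=1)            # fused (B, C, T)

block = MultiPerceptionTemporal()
y = block(torch.randn(2, 64, 8))  # features of an 8-frame clip
```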
Spatiotemporal and motion features are two complementary and crucial information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
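As a rough illustration of encoding motion in a 2D framework, the sketch below computes motion cues as differences between channel-reduced features of adjacent frames, which is the spirit of a channel-wise motion module; the exact module in the paper differs, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class ChannelwiseMotion(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        r = channels // reduction
        self.squeeze = nn.Conv2d(channels, r, kernel_size=1)    # cheap channel reduce
        self.transform = nn.Conv2d(r, r, 3, padding=1, groups=r)  # depthwise 3x3
        self.expand = nn.Conv2d(r, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) per-frame features
        b, t, c, h, w = x.shape
        feat = self.squeeze(x.flatten(0, 1)).unflatten(0, (b, t))
        # motion = transformed feature of frame t+1 minus feature of frame t
        nxt = self.transform(feat[:, 1:].flatten(0, 1)).unflatten(0, (b, t - 1))
        motion = nxt - feat[:, :-1]
        motion = torch.cat([motion, torch.zeros_like(motion[:, :1])], dim=1)
        return self.expand(motion.flatten(0, 1)).unflatten(0, (b, t))

cmm = ChannelwiseMotion()
out = cmm(torch.randn(2, 8, 64, 14, 14))  # 8 frames of 64-channel features
```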
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF achieve great success in zero-shot text-guided 3D synthesis. However, due to the scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from input texts in the text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
Hyperspectral Imaging (HSI) provides detailed spectral information and has been utilised in many real-world applications. This work introduces an HSI dataset of building facades in a light industry environment with the aim of classifying different building materials in a scene. The dataset is called the Light Industrial Building HSI (LIB-HSI) dataset. This dataset consists of nine categories and 44 classes. In this study, we investigated deep learning-based semantic segmentation algorithms on RGB and hyperspectral images to classify various building materials, such as timber, brick and concrete.
Point cloud analysis is receiving increasing attention, however, most existing point cloud models lack the practical ability to deal with the unavoidable presence of unknown objects. This paper mainly discusses point cloud analysis under open-set settings, where we train the model without data from unknown classes and identify them in the inference stage. Basically, we propose to solve open-set point cloud analysis using a novel Point Cut-and-Mix mechanism consisting of Unknown-Point Simulator and Unknown-Point Estimator modules. Specifically, we use the Unknown-Point Simulator to simulate unknown data in the training stage by manipulating the geometric context of partial known data. Based on this, the Unknown-Point Estimator module learns to exploit the point cloud's feature context for discriminating the known and unknown data. Extensive experiments show the plausibility of open-set point cloud analysis and the effectiveness of our proposed solutions. Our code is available at \url{https://github.com/ShiQiu0419/pointcam}.
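A hedged sketch of the Unknown-Point Simulator idea: cut a local region from one known cloud and splice it into another so the composite has an unfamiliar geometric context, then treat it as unknown during training. The function below is illustrative, not the API of the released code.

```python
import numpy as np

def cut_and_mix(cloud_a: np.ndarray, cloud_b: np.ndarray, ratio: float = 0.3):
    """cloud_a, cloud_b: (N, 3) point arrays. Returns a simulated 'unknown' sample."""
    n_cut = int(len(cloud_a) * ratio)
    # Cut: the n_cut nearest neighbors of a random seed point form a local region.
    seed = cloud_a[np.random.randint(len(cloud_a))]
    part = cloud_a[np.argsort(np.linalg.norm(cloud_a - seed, axis=1))[:n_cut]]
    # Mix: drop the same number of points from cloud_b, recenter the cut region
    # onto cloud_b, and splice it in so the geometric context becomes unfamiliar.
    keep = cloud_b[np.random.choice(len(cloud_b), len(cloud_b) - n_cut, replace=False)]
    part = part - part.mean(axis=0) + cloud_b.mean(axis=0)
    return np.concatenate([keep, part], axis=0)  # (N, 3) unknown-class sample

mixed = cut_and_mix(np.random.rand(1024, 3), np.random.rand(1024, 3))
```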
The ability to record high-fidelity videos at high acquisition rates is central to the study of fast moving phenomena. The difficulty of imaging fast moving scenes lies in a trade-off between motion blur and underexposure noise: On the one hand, recordings with long exposure times suffer from motion blur effects caused by movements in the recorded scene. On the other hand, the amount of light reaching camera photosensors decreases with exposure times so that short-exposure recordings suffer from underexposure noise. In this paper, we propose to address this trade-off by treating the problem of high-speed imaging as an underexposed image denoising problem. We combine recent advances on underexposed image denoising using deep learning and adapt these methods to the specificity of the high-speed imaging problem. Leveraging large external datasets with a sensor-specific noise model, our method is able to speed up the acquisition rate of a high-speed camera by over one order of magnitude while maintaining similar image quality.
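The training-data side of this approach can be sketched with a standard Poisson-Gaussian sensor model: darken a clean image to emulate short exposure, add signal-dependent shot noise and read noise, and train a denoiser on the resulting pairs. The parameter values below are placeholders, not a calibrated model of the paper's camera.

```python
import numpy as np

def simulate_underexposed(clean: np.ndarray, exposure_scale: float = 0.05,
                          gain: float = 0.01, read_sigma: float = 0.002):
    """clean: linear-intensity image in [0, 1]. Returns (noisy_input, target)."""
    dark = clean * exposure_scale                    # shorter exposure, less light
    shot = np.random.poisson(dark / gain) * gain     # signal-dependent shot noise
    read = np.random.normal(0.0, read_sigma, clean.shape)  # sensor read noise
    noisy = np.clip(shot + read, 0.0, 1.0)
    # A denoiser is trained to map the brightness-corrected noisy input to `clean`.
    return (noisy / exposure_scale).astype(np.float32), clean.astype(np.float32)

x, y = simulate_underexposed(np.random.rand(64, 64))
```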